A representer theorem for deep kernel learning

Authors

  • Bastian Bohn
  • Michael Griebel
  • Christian Rieger
Abstract

In this paper we provide a representer theorem for a concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. This fundamental result serves as a first mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence of this new representer theorem, the corresponding infinite-dimensional minimization problems can be recast into (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. In this context, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods. We use our results to establish deep learning algorithms based on the composition of functions from reproducing kernel Hilbert spaces.

Similar Articles

Semi-supervised Penalized Output Kernel Regression for Link Prediction

Link prediction is addressed as an output kernel learning task through semi-supervised Output Kernel Regression. Working in the framework of RKHS theory with vector-valued functions, we establish a new representer theorem devoted to semi-supervised least-squares regression. We then apply it to get a new model (POKR: Penalized Output Kernel Regression) and show its relevance using numerical experi...

When Is There a Representer Theorem? Vector Versus Matrix Regularizers

We consider a general class of regularization methods which learn a vector of parameters on the basis of linear measurements. It is well known that if the regularizer is a nondecreasing function of the inner product then the learned vector is a linear combination of the input data. This result, known as the representer theorem, is at the basis of kernel-based methods in machine learning. In thi...

Generalized Regularized Least-Squares Learning with Predefined Features in a Hilbert Space

Kernel-based regularized learning seeks a model in a hypothesis space by minimizing the empirical error and the model’s complexity. Based on the representer theorem, the solution consists of a linear combination of translates of a kernel. This paper investigates a generalized form of representer theorem for kernel-based learning. After mapping predefined features and translates of a kernel simu...

Regularized learning in Banach spaces as an optimization problem: representer theorems

We view regularized learning of a function in a Banach space from its finite samples as an optimization problem. Within the framework of reproducing kernel Banach spaces, we prove the representer theorem for the minimizer of regularized learning schemes with a general loss function and a nondecreasing regularizer. When the loss function and the regularizer are differentiable, a characterization...

Semi-Supervised Learning in Reproducing Kernel Hilbert Spaces Using Local Invariances

We propose a framework for semi-supervised learning in reproducing kernel Hilbert spaces using local invariances that explicitly characterize the behavior of the target function around both labeled and unlabeled data instances. Such invariances include: invariance to small changes to the data instances, invariance to averaging across a small neighbourhood around data instances, and invariance t...

Journal:
  • CoRR

Volume: abs/1709.10441

Publication date: 2017